The rapidly growing use of artificial neural networks (ANNs), increasingly popular in many areas of scientific computing, is driving up the energy consumption of modern high-performance computing systems. Novel neuromorphic paradigms offer an appealing alternative by implementing ANNs directly in hardware. However, little is known about the actual benefits of running ANNs on neuromorphic hardware for use cases in scientific computing. Here we present a methodology for measuring the time to solution of ANNs computing inference tasks on conventional hardware. In addition, we design an architecture for these tasks and estimate the same metrics based on a state-of-the-art analog in-memory computing (AIMC) platform, one of the key paradigms of neuromorphic computing. Both approaches are compared for a use case in quantum many-body physics in two-dimensional condensed-matter systems and for anomaly detection at a 40 MHz rate at the Large Hadron Collider in particle physics. We find that AIMC can achieve up to one order of magnitude shorter computation times and up to three orders of magnitude lower energy costs than conventional hardware. This suggests the potential of neuromorphic hardware for faster and more sustainable scientific computing.
Location-aware networks will introduce new services and applications for modern convenience, surveillance, and public safety. In this paper, we consider the problem of cooperative localization in a wireless network where the positions of certain anchor nodes can be controlled. We introduce an active planning method that aims at moving the anchors such that the information gain of future measurements is maximized. In the control layer of the proposed method, control inputs are calculated by minimizing the traces of approximate inverse Bayesian Fisher information matrices (FIMs). The estimation layer computes estimates of the agent states and provides Gaussian representations of marginal posteriors of agent positions to the control layer for approximate Bayesian FIM computations. Based on a cost function that accumulates Bayesian FIM contributions over a sliding window of discrete future timesteps, a receding horizon (RH) control is performed. Approximations that make it possible to solve the resulting tree-search problem efficiently are also discussed. A numerical case study demonstrates the intelligent behavior of a single controlled anchor in a 3-D scenario and the resulting significantly improved localization accuracy.
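The core control step described above can be illustrated with a simplified, one-step sketch. Assuming range-only measurements with Gaussian noise (the paper's measurement model is not specified here), the FIM of an agent position is a sum of rank-one terms along the anchor-to-agent line-of-sight directions, and a greedy controller picks the anchor move that minimizes the trace of the inverse FIM. The function names, noise level, and the small prior term for invertibility are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def range_fim(anchor_positions, agent_pos, sigma=0.5, prior=1e-3):
    # Fisher information of the agent position from range-only
    # measurements to each anchor, plus a small prior for invertibility.
    J = prior * np.eye(len(agent_pos))
    for a in anchor_positions:
        d = agent_pos - a
        u = d / np.linalg.norm(d)          # unit line-of-sight vector
        J += np.outer(u, u) / sigma**2
    return J

def best_anchor_move(anchor, agent_pos, candidate_moves, fixed_anchors, **kw):
    # Greedy one-step control (the paper uses a receding horizon instead):
    # pick the move minimizing the trace of the inverse FIM, a proxy for
    # the position-error bound.
    def cost(move):
        J = range_fim(fixed_anchors + [anchor + move], agent_pos, **kw)
        return np.trace(np.linalg.inv(J))
    return min(candidate_moves, key=cost)
```

With two anchors collinear with the agent, the geometry is degenerate; the controller prefers a move that opens up the viewing angle, which is the intelligent behavior the case study reports.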
Graph Neural Networks (GNNs) have shown great potential in the field of graph representation learning. Standard GNNs define a local message-passing mechanism which propagates information over the whole graph domain by stacking multiple layers. This paradigm suffers from two major limitations, over-squashing and poor long-range dependencies, that can be addressed with global attention, but doing so increases the computational cost to quadratic complexity. In this work, we propose an alternative approach to overcome these structural limitations by leveraging the ViT/MLP-Mixer architectures introduced in computer vision. We introduce a new class of GNNs, called Graph MLP-Mixer, that holds three key properties. First, they capture long-range dependencies and mitigate the issue of over-squashing, as demonstrated on the Long Range Graph Benchmark (LRGB) and the TreeNeighbourMatch datasets. Second, they offer better speed and memory efficiency, with complexity linear in the number of nodes and edges, surpassing the related Graph Transformer and expressive GNN models. Third, they show high expressivity in terms of graph isomorphism, as they can distinguish at least 3-WL non-isomorphic graphs. We test our architecture on 4 simulated datasets and 7 real-world benchmarks, and show highly competitive results on all of them.
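The MLP-Mixer mechanism borrowed from vision can be sketched in a few lines. Assuming the graph has already been split into patches, each summarized by an embedding vector (the paper's patch-extraction details are not reproduced here), one mixer layer alternates a "token-mixing" MLP across patches with a "channel-mixing" MLP across features, both linear in the number of patches rather than quadratic as in attention. Weight names and the use of ReLU are illustrative simplifications:

```python
import numpy as np

def mlp(x, w1, w2):
    # Two-layer MLP (ReLU here for simplicity; Mixer papers use GELU).
    return np.maximum(x @ w1, 0) @ w2

def mixer_layer(patches, tw1, tw2, cw1, cw2):
    """One MLP-Mixer layer over graph patch embeddings (sketch).

    patches: (num_patches, channels) array, one row per graph patch.
    Token mixing operates across patches, channel mixing across features;
    the cost is linear in the number of patches, avoiding the quadratic
    attention of Graph Transformers.
    """
    # Token mixing: transpose so the MLP mixes information across patches.
    x = patches + mlp(patches.T, tw1, tw2).T
    # Channel mixing: the MLP mixes features within each patch.
    return x + mlp(x, cw1, cw2)
```

Because every patch can exchange information with every other patch in a single layer, long-range dependencies do not have to be squeezed through many message-passing hops, which is the over-squashing issue the abstract refers to.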
Detecting actions in untrimmed videos should not be limited to a small, closed set of classes. We present a simple, yet effective strategy for open-vocabulary temporal action detection utilizing pretrained image-text co-embeddings. Despite being trained on static images rather than videos, we show that image-text co-embeddings enable open-vocabulary performance competitive with fully-supervised models. We show that the performance can be further improved by ensembling the image-text features with features encoding local motion, like optical-flow-based features, or other modalities, like audio. In addition, we propose a more reasonable open-vocabulary evaluation setting for the ActivityNet dataset, where the category splits are based on similarity rather than random assignment.
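The basic open-vocabulary scoring step with image-text co-embeddings can be sketched as follows: embed a frame and the free-form class names in the shared space, then score classes by cosine similarity (as CLIP-style models do). The temperature value and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def zero_shot_scores(frame_emb, class_text_embs, temperature=0.07):
    # Cosine similarity between a frame embedding and each class-name
    # text embedding, turned into a distribution with a softmax.
    f = frame_emb / np.linalg.norm(frame_emb)
    t = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    logits = t @ f / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

Because the class set is only a list of text strings, new action categories can be added at inference time without retraining, which is what makes the detector open-vocabulary.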
The word alignment task, despite its prominence in the era of statistical machine translation (SMT), is niche and under-explored today. In this two-part tutorial, we argue for the continued relevance of word alignment. The first part provides a historical background to word alignment as a core component of the traditional SMT pipeline. We zero in on GIZA++, an unsupervised, statistical word aligner with surprising longevity. Jumping forward to the era of neural machine translation (NMT), we show how insights from word alignment inspired the attention mechanism fundamental to present-day NMT. The second part shifts to a survey approach. We cover neural word aligners, showing the slow but steady progress towards surpassing GIZA++ performance. Finally, we cover the present-day applications of word alignment, from cross-lingual annotation projection to improving translation.
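The statistical tradition the tutorial revisits is well illustrated by IBM Model 1, the simplest of the models underlying GIZA++. A minimal EM trainer (simplified: no NULL word, uniform initialization) fits lexical translation probabilities from sentence pairs alone, with no alignment supervision:

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """Minimal IBM Model 1 trainer (sketch of the statistical family
    behind GIZA++; simplified: no NULL word, uniform initialization).

    `pairs` is a list of (source_tokens, target_tokens) sentence pairs;
    returns translation probabilities t[(src, tgt)] = P(tgt | src).
    """
    src_vocab = {w for s, _ in pairs for w in s}
    tgt_vocab = {w for _, t in pairs for w in t}
    t = {(s, g): 1.0 / len(tgt_vocab) for s in src_vocab for g in tgt_vocab}
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for src, tgt in pairs:            # E-step: expected alignment counts
            for g in tgt:
                z = sum(t[(s, g)] for s in src)
                for s in src:
                    c = t[(s, g)] / z
                    count[(s, g)] += c
                    total[s] += c
        for (s, g), c in count.items():   # M-step: renormalize
            t[(s, g)] = c / total[s]
    return t
```

Even on two toy sentence pairs, EM disambiguates co-occurring words by pitting sentences against each other, the same pigeonhole effect that lets GIZA++ learn alignments without labels.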
How do we know when the predictions made by a classifier can be trusted? This is a fundamental problem that also has immense practical applicability, especially in safety-critical areas such as medicine and autonomous driving. The de facto approach of using the classifier's softmax outputs as a proxy for trustworthiness suffers from the over-confidence issue, while the most recent works incur problems such as additional retraining cost and an accuracy-versus-trustworthiness trade-off. In this work, we argue that the trustworthiness of a classifier's prediction for a sample is highly associated with two factors: the sample's neighborhood information and the classifier's output. To combine the best of both worlds, we design a model-agnostic post-hoc approach, NeighborAgg, to leverage these two essential sources of information via an adaptive neighborhood aggregation. Theoretically, we show that NeighborAgg is a generalized version of a one-hop graph convolutional network, inheriting the powerful modeling ability to capture the varying similarity between samples within each class. We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false-negative rate. Empirically, extensive experiments on image and tabular benchmarks verify our theory and suggest that NeighborAgg outperforms other methods, achieving state-of-the-art trustworthiness performance.
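The two signals the abstract combines can be sketched with a deliberately simplified fixed-weight fusion (NeighborAgg itself learns an adaptive aggregation, which is not reproduced here): the classifier's softmax vector plus a histogram of the labels of the sample's k nearest training neighbors. The `alpha` weight and all names are assumptions of this sketch:

```python
import numpy as np

def trust_score(softmax_probs, neighbor_labels, num_classes, alpha=0.5):
    """Fixed-weight sketch of softmax + neighborhood fusion.

    Combines the classifier's softmax vector with a histogram of the
    sample's k nearest training neighbors' labels; `alpha` is a
    hyperparameter of this sketch, whereas NeighborAgg learns the
    aggregation adaptively.
    """
    hist = np.bincount(neighbor_labels, minlength=num_classes).astype(float)
    hist /= hist.sum()
    combined = alpha * softmax_probs + (1 - alpha) * hist
    pred = int(np.argmax(softmax_probs))
    return combined[pred]   # high when neighbors agree with the prediction
```

A confident softmax backed by agreeing neighbors yields a high score, while the same softmax surrounded by differently-labeled neighbors is down-weighted, which is the over-confidence failure mode the softmax-only baseline cannot catch.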
Prior work has shown that Visual Recognition datasets frequently underrepresent bias groups $B$ (\eg Female) within class labels $Y$ (\eg Programmers). This dataset bias can lead to models that learn spurious correlations between class labels and bias groups such as age, gender, or race. Most recent methods that address this problem require significant architectural changes or additional loss functions requiring more hyper-parameter tuning. Alternatively, data sampling baselines from the class imbalance literature (\eg Undersampling, Upweighting), which can often be implemented in a single line of code and often have no hyperparameters, offer a cheaper and more efficient solution. However, these methods suffer from significant shortcomings. For example, Undersampling drops a significant part of the input distribution, while Oversampling repeats samples, causing overfitting. To address these shortcomings, we introduce a new class-conditioned sampling method: Bias Mimicking (BM). The method is based on the observation that if a class $c$ bias distribution, \ie $P_D(B|Y=c)$, is mimicked across every $c^{\prime}\neq c$, then $Y$ and $B$ are statistically independent. Using this notion, BM, through a novel training procedure, ensures that the model is exposed to the entire distribution without repeating samples. Consequently, Bias Mimicking improves the average accuracy of sampling methods on underrepresented groups by 3\% over four benchmarks, while maintaining and sometimes improving performance over non-sampling methods. Code can be found at https://github.com/mqraitem/Bias-Mimicking
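The central observation, mimicking $P_D(B|Y=c)$ across the other classes, can be illustrated with a small subsampling sketch. This is an illustrative re-implementation of the idea for one target class, not the authors' training procedure (which cycles over classes without discarding data); all names are assumptions:

```python
import random
from collections import Counter

def mimic_bias(samples, target_class, seed=0):
    """Subsample every class c' != target_class so that its bias-group
    distribution P(B | Y=c') matches P(B | Y=target_class).

    `samples` is a list of (label, bias_group) pairs; returns the kept
    subset (up to integer rounding of group counts).
    """
    rng = random.Random(seed)
    target = [s for s in samples if s[0] == target_class]
    ratios = Counter(b for _, b in target)   # bias-group counts in class c
    total = len(target)
    kept = list(target)
    for c in {y for y, _ in samples} - {target_class}:
        groups = {}
        for s in samples:
            if s[0] == c:
                groups.setdefault(s[1], []).append(s)
        # Scale so the scarcest bias group still allows the target ratios.
        scale = min(len(groups.get(b, [])) / (n / total)
                    for b, n in ratios.items())
        for b, n in ratios.items():
            take = int(scale * n / total)
            kept.extend(rng.sample(groups.get(b, []), take))
    return kept
```

When every class shares the same bias-group distribution, knowing $B$ carries no information about $Y$, so the spurious correlation the abstract warns about has nothing to latch onto.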
For aerial robots, landing upside down in a rapid and robust manner is a challenging feat, especially when relying entirely on onboard sensing and computation. Nevertheless, this feat is routinely performed by biological fliers such as bats, flies, and bees. Our previous work identified a direct causal connection between a series of onboard visual cues and kinematic actions that allows this challenging aerobatic maneuver to be performed reliably on small aerial robots. In this work, we first leverage deep reinforcement learning and physics-based simulation to obtain a general, optimal control policy for robust inverted landing starting from any arbitrary approach condition. This optimized control policy provides a computationally efficient mapping from the system's observation space to its motor-command action space, including the triggering and control of the rotational maneuver. This was achieved by training the system over a large range of approach flight velocities that varied in magnitude and direction. Next, we performed a sim-to-real transfer and experimental validation of the learned policy via domain randomization, varying the robot's inertial parameters in simulation. Through experimental trials, we identified several dominant factors that greatly improved landing robustness, as well as the primary mechanisms determining inverted-landing success. We expect the learning framework developed in this study can be generalized to solve more challenging tasks, such as exploiting noisy onboard sensory data, landing on surfaces of various orientations, or landing on dynamically moving surfaces.
Generating short stories from images is challenging. Unlike image captioning, story generation from images poses multiple challenges: maintaining story coherence, appropriately assessing story quality, steering the generated story toward a certain style, and addressing the scarcity of reference datasets of image-story pairs, which limits supervision during training. In this work, we introduce Plug-and-Play Story Teller (PPST) and improve image-to-story generation by: 1) alleviating the data-scarcity problem by incorporating large pretrained models, namely CLIP and GPT-2, to facilitate fluent image-to-text generation with minimal supervision, and 2) incorporating stylistic adapters to control the story generation, enabling more style-relevant generation. We conduct image-to-story generation experiments with PPST in non-stylistic, romance-style, and action-style settings, and compare our generated stories with those of previous work along three dimensions, namely story coherence, image-story relevance, and style fitness, using both automatic and human evaluation. The results show that PPST improves story coherence and achieves better image-story relevance, but its style fitness is not yet sufficient.
With the ever-growing deployment of neural networks, the need for complete and sound verification of their properties has become critical. In recent years, it was established that binarized neural networks (BNNs) have an equivalent representation in Boolean logic and can be formally analyzed using logical reasoning tools such as SAT solvers. However, to date, only BNNs could be transformed into SAT formulas. In this work, we introduce Truth Table Deep Convolutional Neural Networks (TTnets), a new family of SAT-encodable models that, for the first time, have real-valued weights. Moreover, they admit, by construction, valuable properties in the robustness-verification setting, including post-tuning and tractability. The latter property leads to a more compact SAT symbolic encoding than that of BNNs. This enables the use of general SAT solvers and makes property verification easier. We demonstrate the value of TTnets for formal robustness properties: TTnets outperform the verified accuracy of all BNNs with comparable computation times. More generally, they represent a relevant trade-off among all known complete verification methods: TTnets achieve high verified accuracy with fast verification times and no timeouts. We explore here a proof of concept of TTnets for one very important application (complete verification of robustness), and we believe this novel real-valued network constitutes a practical response to the rising need for functional formal verification. We postulate that TTnets can be applied to various CNN-based architectures and extended to other properties such as fairness, fault attacks, and exact rule extraction.
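The building block that makes such models SAT-encodable is the conversion of a small Boolean function (a truth table) into CNF clauses. A textbook sketch of that step, using the canonical encoding with one clause per falsifying row (the paper's actual encoding is more compact), is shown below; all names are illustrative:

```python
from itertools import product

def truth_table_to_cnf(fn, n):
    """Encode a Boolean function over n inputs as CNF clauses (sketch of
    the basic step behind SAT-encodable networks: canonical CNF with one
    clause per falsifying row of the truth table).

    Variables are 1..n; the positive literal i means input i is True.
    """
    clauses = []
    for bits in product([False, True], repeat=n):
        if not fn(*bits):
            # Forbid this assignment: the clause is violated only when
            # the inputs match the falsifying row exactly.
            clauses.append([-(i + 1) if b else (i + 1)
                            for i, b in enumerate(bits)])
    return clauses

def evaluate_cnf(clauses, assignment):
    # assignment: dict mapping variable index -> bool
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
```

Once every small learned function in the network is expressed this way, a robustness query becomes one big satisfiability check that an off-the-shelf SAT solver can decide completely, with no approximation.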